LLM Analytics

For instructions on how to authenticate to use this endpoint, see API overview.

Endpoints

POST /api/environments/:project_id/llm_analytics/evaluation_summary
POST /api/environments/:project_id/llm_analytics/sentiment
POST /api/environments/:project_id/llm_analytics/summarization
POST /api/environments/:project_id/llm_analytics/summarization/batch_check
POST /api/environments/:project_id/llm_analytics/text_repr

Create llm analytics evaluation summary

Generate an AI-powered summary of evaluation results.

This endpoint analyzes evaluation runs and identifies patterns in passing and failing evaluations, providing actionable recommendations.

Data is fetched server-side by evaluation ID to ensure data integrity.

Use Cases:

  • Understand why evaluations are passing or failing
  • Identify systematic issues in LLM responses
  • Get recommendations for improving response quality
  • Review patterns across many evaluation runs at once
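As a sketch of how a client might digest the summary payload once it comes back (field names follow the example response in this section; `summarize_failures` is a hypothetical helper, not part of the API):

```python
# Hypothetical helper for digesting an evaluation summary response.
# Field names ("fail_patterns", "recommendations", "statistics") follow
# the example response shown in this section.
def summarize_failures(summary: dict) -> dict:
    stats = summary.get("statistics", {})
    total = stats.get("total_analyzed", 0)
    fails = stats.get("fail_count", 0)
    return {
        "fail_rate": fails / total if total else 0.0,
        "fail_titles": [p["title"] for p in summary.get("fail_patterns", [])],
        "recommendations": summary.get("recommendations", []),
    }

example = {
    "fail_patterns": [{"title": "Factual Errors"}],
    "recommendations": ["Implement fact-checking for critical claims"],
    "statistics": {"total_analyzed": 3, "pass_count": 2,
                   "fail_count": 1, "na_count": 0},
}
result = summarize_failures(example)
```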

Required API key scopes

llm_analytics:write

Path parameters

  • project_id
    string

Request parameters

  • evaluation_id
    string
  • filter
    Default: all
  • generation_ids
    array
  • force_refresh
    boolean
    Default: false

Response


Example request

POST /api/environments/:project_id/llm_analytics/evaluation_summary
export POSTHOG_PERSONAL_API_KEY=[your personal api key]
curl -X POST \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $POSTHOG_PERSONAL_API_KEY" \
  <ph_app_host>/api/environments/:project_id/llm_analytics/evaluation_summary/ \
  -d '{"evaluation_id": "string"}'

Example response

Status 200
RESPONSE
{
  "overall_assessment": "Evaluations show generally good quality with some factual accuracy issues.",
  "pass_patterns": [
    {
      "title": "Clear Communication",
      "description": "Responses consistently provided well-structured information",
      "frequency": "common",
      "example_generation_ids": ["gen_abc123", "gen_ghi789"]
    }
  ],
  "fail_patterns": [
    {
      "title": "Factual Errors",
      "description": "Some responses contained inaccurate information",
      "frequency": "occasional",
      "example_generation_ids": ["gen_def456"]
    }
  ],
  "na_patterns": [],
  "recommendations": [
    "Implement fact-checking for critical claims",
    "Add source citations where applicable"
  ],
  "statistics": {
    "total_analyzed": 3,
    "pass_count": 2,
    "fail_count": 1,
    "na_count": 0
  }
}
Status 400
Status 403
Status 404
Status 500

Create llm analytics sentiment

Required API key scopes

llm_analytics:write

Path parameters

  • project_id
    string

Request parameters

  • ids
    array
  • analysis_level
    Default: trace
  • force_refresh
    boolean
    Default: false
  • date_from
    string
  • date_to
    string

Response


Example request

POST /api/environments/:project_id/llm_analytics/sentiment
export POSTHOG_PERSONAL_API_KEY=[your personal api key]
curl -X POST \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $POSTHOG_PERSONAL_API_KEY" \
  <ph_app_host>/api/environments/:project_id/llm_analytics/sentiment/ \
  -d '{"ids": ["string"]}'

Example response

Status 200
RESPONSE
{
  "results": {
    "property1": {
      "label": "string",
      "score": 0.1,
      "scores": {"property1": 0.1, "property2": 0.1},
      "messages": {
        "property1": {
          "label": "string",
          "score": 0.1,
          "scores": {"property1": 0.1, "property2": 0.1}
        },
        "property2": {
          "label": "string",
          "score": 0.1,
          "scores": {"property1": 0.1, "property2": 0.1}
        }
      },
      "message_count": 0
    },
    "property2": {
      "label": "string",
      "score": 0.1,
      "scores": {"property1": 0.1, "property2": 0.1},
      "messages": {
        "property1": {
          "label": "string",
          "score": 0.1,
          "scores": {"property1": 0.1, "property2": 0.1}
        },
        "property2": {
          "label": "string",
          "score": 0.1,
          "scores": {"property1": 0.1, "property2": 0.1}
        }
      },
      "message_count": 0
    }
  }
}
Status 400
Status 500
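The per-id results can also be aggregated client-side; a minimal sketch, assuming the response shape in the generic example above (`dominant_label` is a hypothetical helper, not part of the API):

```python
# Hypothetical client-side aggregation over a sentiment response.
# Each entry in "results" carries a "label", an overall "score", and
# per-label "scores", per the example response above.
def dominant_label(results: dict) -> str:
    # Sum per-label scores across all analyzed ids and pick the largest.
    totals: dict[str, float] = {}
    for entry in results.values():
        for label, score in entry.get("scores", {}).items():
            totals[label] = totals.get(label, 0.0) + score
    return max(totals, key=totals.get) if totals else "unknown"

example = {
    "trace_1": {"label": "positive", "score": 0.8,
                "scores": {"positive": 0.8, "negative": 0.2}},
    "trace_2": {"label": "negative", "score": 0.6,
                "scores": {"positive": 0.4, "negative": 0.6}},
}
```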

Create llm analytics summarization

Generate an AI-powered summary of an LLM trace or event.

This endpoint analyzes the provided trace/event, generates a line-numbered text representation, and uses an LLM to create a concise summary with line references.

Summary Format:

  • 5-10 bullet points covering main flow and key decisions
  • "Interesting Notes" section for failures, successes, or unusual patterns
  • Line references in [L45] or [L45-52] format pointing to relevant sections

Use Cases:

  • Quick understanding of complex traces
  • Identifying key events and patterns
  • Debugging with AI-assisted analysis
  • Documentation and reporting

The response includes the summary text and optional metadata.
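Because line references follow the `[L45]` / `[L45-52]` format described above, a client can extract them with a small regex; a sketch that assumes only the documented format:

```python
import re

# Extract [L45] / [L45-52] style line references from a summary string,
# returning (start, end) line pairs. Format per the Summary Format notes.
REF = re.compile(r"\[L(\d+)(?:-(\d+))?\]")

def line_refs(summary: str) -> list[tuple[int, int]]:
    return [(int(a), int(b) if b else int(a)) for a, b in REF.findall(summary)]

refs = line_refs("- User initiated conversation with greeting [L5-8]\n"
                 "- Assistant responded [L12]")
```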

Required API key scopes

llm_analytics:write

Path parameters

  • project_id
    string

Request parameters

  • summarize_type
  • mode
    Default: minimal
  • data
  • force_refresh
    boolean
    Default: false
  • model
    string

Response


Example request

POST /api/environments/:project_id/llm_analytics/summarization
export POSTHOG_PERSONAL_API_KEY=[your personal api key]
curl -X POST \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $POSTHOG_PERSONAL_API_KEY" \
  <ph_app_host>/api/environments/:project_id/llm_analytics/summarization/ \
  -d '{"summarize_type": "string", "data": {}}'

Example response

Status 200
RESPONSE
{
  "summary": "## Summary\n- User initiated conversation with greeting [L5-8]\n- Assistant responded with friendly message [L12-15]\n\n## Interesting Notes\n- Standard greeting pattern with no errors",
  "metadata": {
    "text_repr_length": 450,
    "model": "gpt-4.1"
  }
}
Status 400
Status 403
Status 500

Create llm analytics summarization batch check

Check which traces have cached summaries available.

This endpoint checks multiple trace IDs in one request and returns only the traces that have cached summaries, along with their titles.

Use Cases:

  • Load cached summaries on session view load
  • Avoid unnecessary LLM calls for already-summarized traces
  • Display summary previews without generating new summaries
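A client can diff its trace IDs against the cached set to decide which summaries still need generating; a hedged sketch, with response fields taken from the example response in this section (`uncached_traces` is a hypothetical helper):

```python
# Hypothetical helper: given the trace IDs we asked about and the
# batch_check response, return the IDs that still need a fresh summary.
def uncached_traces(requested: list[str], response: dict) -> list[str]:
    cached = {s["trace_id"] for s in response.get("summaries", [])
              if s.get("cached")}
    return [t for t in requested if t not in cached]

resp = {"summaries": [{"trace_id": "t1", "title": "Greeting flow", "cached": True}]}
```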

Path parameters

  • project_id
    string

Request parameters

  • trace_ids
    array
  • mode
    Default: minimal
  • model
    string

Response


Example request

POST /api/environments/:project_id/llm_analytics/summarization/batch_check
export POSTHOG_PERSONAL_API_KEY=[your personal api key]
curl -X POST \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $POSTHOG_PERSONAL_API_KEY" \
  <ph_app_host>/api/environments/:project_id/llm_analytics/summarization/batch_check/ \
  -d '{"trace_ids": ["string"]}'

Example response

Status 200
RESPONSE
{
  "summaries": [
    {
      "trace_id": "string",
      "title": "string",
      "cached": true
    }
  ]
}
Status 400
Status 403

Create llm analytics text repr

Generate a human-readable text representation of an LLM trace event.

This endpoint converts LLM analytics events ($ai_generation, $ai_span, $ai_embedding, or $ai_trace) into formatted text representations suitable for display, logging, or analysis.

Supported Event Types:

  • $ai_generation: Individual LLM API calls with input/output messages
  • $ai_span: Logical spans with state transitions
  • $ai_embedding: Embedding generation events (text input → vector)
  • $ai_trace: Full traces with hierarchical structure

Options:

  • max_length: Maximum character count (default: 2000000)
  • truncated: Enable middle-content truncation within events (default: true)
  • truncate_buffer: Characters at start/end when truncating (default: 1000)
  • include_markers: Use interactive markers vs plain text indicators (default: true)
    • Frontend: set true for <<<TRUNCATED|base64|...>>> markers
    • Backend/LLM: set false for ... (X chars truncated) ... text
  • collapsed: Show summary vs full trace tree (default: false)
  • include_hierarchy: Include tree structure for traces (default: true)
  • max_depth: Maximum depth for hierarchical rendering (default: unlimited)
  • tools_collapse_threshold: Number of tools before auto-collapsing list (default: 5)
    • Tool lists >5 items show <<<TOOLS_EXPANDABLE|...>>> marker for frontend
    • Or [+] AVAILABLE TOOLS: N for backend when include_markers: false
  • include_line_numbers: Prefix each line with line number like L001:, L010: (default: false)
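The middle-content truncation controlled by truncated and truncate_buffer can be pictured with a small sketch. This mimics the plain-text "... (X chars truncated) ..." form used when include_markers is false; the helper is illustrative only, not the server's implementation:

```python
# Illustrative middle truncation: keep `buffer` chars at each end and
# replace the middle with a plain-text indicator, as when
# include_markers is false. Not the server's actual implementation.
def truncate_middle(text: str, buffer: int = 1000) -> str:
    if len(text) <= 2 * buffer:
        return text
    cut = len(text) - 2 * buffer
    return text[:buffer] + f"... ({cut} chars truncated) ..." + text[-buffer:]

short = truncate_middle("abc", buffer=4)      # short input passes through
clipped = truncate_middle("x" * 20, buffer=4)
```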

Use Cases:

  • Frontend display: truncated: true, include_markers: true, include_line_numbers: true
  • Backend LLM context (summary): truncated: true, include_markers: false, collapsed: true
  • Backend LLM context (full): truncated: false

The response includes the formatted text and metadata about the rendering.

Required API key scopes

llm_analytics:write

Path parameters

  • project_id
    string

Request parameters

  • event_type
  • data
  • options

Response


Example request

POST /api/environments/:project_id/llm_analytics/text_repr
export POSTHOG_PERSONAL_API_KEY=[your personal api key]
curl -X POST \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $POSTHOG_PERSONAL_API_KEY" \
  <ph_app_host>/api/environments/:project_id/llm_analytics/text_repr/ \
  -d '{"event_type": "string", "data": {}}'

Example response

Status 200
RESPONSE
{
  "text": "INPUT:\n\n[1] USER\n\nWhat is the capital of France?\n\n...",
  "metadata": {
    "event_type": "$ai_generation",
    "event_id": "gen_123",
    "rendering": "detailed",
    "char_count": 150,
    "truncated": false
  }
}
Status 400
Status 500
Status 503
